# Cross-Encoder Architecture
| Model | Organization | License | Tags | Downloads | Likes | Description |
|---|---|---|---|---|---|---|
| Namaa ARA Reranker V1 | NAMAA-Space | Apache-2.0 | Text Embedding, Transformers, Arabic | 56 | 4 | A model designed for Arabic reranking tasks, accurately evaluating the relevance between queries and passages. |
| Ruri Reranker Stage1 Large | cl-nagoya | Apache-2.0 | Text Embedding, Japanese | 23 | 1 | A Japanese text reranking model based on sentence-transformers, designed to optimize relevance ranking between queries and documents. |
| Ruri Reranker Stage1 Base | cl-nagoya | Apache-2.0 | Text Embedding, Japanese | 26 | 0 | A Transformer-based Japanese reranking model designed to optimize the ranking quality of retrieval results. |
| Ruri Reranker Small | cl-nagoya | Apache-2.0 | Text Embedding, Japanese | 116 | 2 | A reranking model optimized for Japanese text, built on sentence-transformers, that improves the relevance ranking of search results. |
| Ruri Reranker Stage1 Small | cl-nagoya | Apache-2.0 | Text Embedding, Japanese | 25 | 0 | A general-purpose Japanese reranking model for improving the relevance ranking of Japanese retrieval results; the small version keeps strong performance at a reduced parameter count. |
| Monobert Legal French | maastrichtlawtech | MIT | Text Classification, French | 802 | 1 | A French text classification model based on CamemBERT, designed for passage reranking in the legal domain. |
| Japanese Reranker Cross Encoder Large V1 | hotchpotch | MIT | Text Embedding, Japanese | 2,959 | 15 | A high-performance cross-encoder optimized for Japanese reranking, with a 24-layer architecture and 1024 hidden units. |
| Medcpt Cross Encoder | ncbi | Other | Text Embedding, Transformers | 61.12k | 18 | A sequence classification model for biomedical information retrieval, well suited to relevance ranking of medical articles. |
| Ag Nli DeTS Sentence Similarity V1 | abbasgolestani | Apache-2.0 | Text Embedding, Transformers, Multilingual | 982 | 0 | Trained with the Cross-Encoder class from SentenceTransformers to predict a semantic similarity score between two sentences. |
| Crossencoder Camembert Large | dangvantuan | Apache-2.0 | Text Embedding, Transformers, French | 167 | 16 | A French sentence-similarity model based on CamemBERT that predicts a semantic similarity score between two sentences. |
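All of the models above follow the same cross-encoder pattern: the query and each candidate passage are scored jointly as a pair, and candidates are sorted by that score. The sketch below illustrates the reranking loop with a toy lexical-overlap scorer standing in for the model; in practice you would replace `lexical_overlap_score` with a trained cross-encoder (for example, the `CrossEncoder` class from sentence-transformers and its `predict()` method, which several of the listed models are built on). The function names here are illustrative, not part of any listed model's API.

```python
def lexical_overlap_score(query: str, passage: str) -> float:
    """Toy relevance scorer: fraction of query tokens found in the passage.

    A real cross-encoder instead feeds the concatenated (query, passage)
    pair through a Transformer and outputs a learned relevance score.
    """
    query_tokens = set(query.lower().split())
    passage_tokens = set(passage.lower().split())
    return len(query_tokens & passage_tokens) / max(len(query_tokens), 1)


def rerank(query, passages, scorer=lexical_overlap_score, top_k=None):
    """Score every (query, passage) pair and return passages sorted
    by descending relevance as (passage, score) tuples."""
    scored = sorted(((scorer(query, p), p) for p in passages), reverse=True)
    return [(p, s) for s, p in scored[:top_k]]


ranked = rerank(
    "aspirin heart disease",
    [
        "aspirin lowers fever",
        "the cat sat on the mat",
        "aspirin and heart disease risk",
    ],
)
```

Because the scorer sees both texts at once, a cross-encoder can model fine-grained query–passage interactions, which is why these models are typically used to rerank a small candidate set retrieved by a cheaper bi-encoder.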